Chromosome analysis is essential for diagnosing genetic disorders. For hematologic malignancies, identification of somatic clonal aberrations by karyotype analysis remains the standard of care. However, karyotyping is costly and time-consuming because of the largely manual process and the expertise required to identify and annotate aberrations. Efforts to date to automate karyotype analysis have fallen short in aberration detection. Using a training set of ~10k patient specimens and ~50k karyograms from over 5 years from the Fred Hutchinson Cancer Center, we created a labeled set of images representing individual chromosomes. These individual chromosomes were used to train and assess deep learning models for classifying the 24 human chromosomes and identifying chromosomal aberrations. The top-accuracy models utilized the recently introduced Topological Vision Transformers (TopViTs) with 2-level-block-Toeplitz masking to incorporate structural inductive bias. TopViT outperformed CNN (Inception) models with >99.3% accuracy for chromosome identification, and exhibited accuracies >99% for aberration detection on most aberrations. Notably, we were able to show high-quality performance even in "few shot" learning scenarios. Incorporating the definition of clonality substantially improved both precision and recall (sensitivity). When applied to "zero shot" scenarios, the model captured aberrations without training, with perfect precision at >50% recall. Together these results show that modern deep learning models can approach expert-level performance for chromosome aberration detection. To our knowledge, this is the first study demonstrating the downstream effectiveness of TopViTs. These results open up exciting opportunities for not only expediting patient results but also providing a scalable technology for early screening of low-abundance chromosomal lesions.
Can we train a single transformer model capable of processing multiple modalities and datasets, while sharing almost all of its learnable parameters? We present PolyViT, a single model trained on images, audio and video to answer this question. By co-training different tasks on a single modality, we are able to improve the accuracy of each individual task and achieve state-of-the-art results on 5 standard video and audio classification datasets. Co-training PolyViT on multiple modalities and tasks leads to a more parameter-efficient model that learns representations which generalize across multiple domains. Moreover, we show that co-training is simple and practical to implement, as we do not need to tune hyperparameters for each combination of datasets, but can simply adapt those from standard single-task training.
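The co-training described above interleaves minibatches from different tasks through one shared backbone. A minimal sketch of one plausible ingredient, a task-sampling schedule weighted by dataset size (the function name and the size-proportional weighting are illustrative assumptions, not PolyViT's exact recipe):

```python
import random

def cotraining_schedule(dataset_sizes, steps, seed=0):
    """Sample one task per training step, in proportion to dataset size.
    Each step, the shared model would be updated on a batch of that task."""
    rng = random.Random(seed)
    tasks = list(dataset_sizes)
    weights = [dataset_sizes[t] for t in tasks]
    return [rng.choices(tasks, weights=weights)[0] for _ in range(steps)]

# Hypothetical dataset sizes for three modalities.
sched = cotraining_schedule({"image": 1000, "audio": 500, "video": 250}, 10)
print(sched)
```

Larger datasets are visited more often, so no per-combination hyperparameter tuning is needed; the single-task settings carry over.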
In this paper we provide, to our knowledge, the first comprehensive approach for incorporating various masking mechanisms into Transformer architectures in a scalable way. We show that recent results on linear causal attention (Choromanski et al., 2021) and log-linear RPE attention (Luo et al., 2021) are special cases of this general mechanism. However, by casting the problem as a topological (graph-based) modulation of unmasked attention, we obtain several previously unknown results, including efficient d-dimensional RPE masking and graph-kernel masking. We leverage many mathematical techniques, ranging from spectral analysis through dynamic programming and random walks to new algorithms for solving Markov processes on graphs. We provide a corresponding empirical evaluation.
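The modulation view treats masked attention as an entry-wise product of the unmasked attention matrix with a mask; for 1D relative position encodings the mask depends only on i − j, i.e. it is a Toeplitz matrix. A minimal NumPy sketch of this baseline (names are illustrative; the paper's point is precisely that efficient variants avoid materializing the L × L mask):

```python
import numpy as np

def rpe_toeplitz_mask(rel_bias, L):
    """Build an L x L Toeplitz mask M[i, j] = rel_bias[i - j + L - 1]
    from a vector of 2L - 1 relative-position biases."""
    idx = np.arange(L)[:, None] - np.arange(L)[None, :] + L - 1
    return rel_bias[idx]

def masked_softmax_attention(Q, K, V, mask):
    """Unmasked softmax attention modulated entry-wise by a positive mask,
    then row-renormalized."""
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)
    A = np.exp(logits - logits.max(axis=-1, keepdims=True)) * mask
    A = A / A.sum(axis=-1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
L, d = 6, 4
Q, K, V = rng.normal(size=(3, L, d))
# exp(...) keeps the mask positive so the modulated matrix renormalizes.
mask = rpe_toeplitz_mask(np.exp(rng.normal(size=2 * L - 1)), L)
out = masked_softmax_attention(Q, K, V, mask)
print(out.shape)  # (6, 4)
```

The Toeplitz structure is what makes sub-quadratic algorithms possible; a graph-based mask generalizes i − j to a distance or kernel on an arbitrary graph over the tokens.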
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and to investigate optimal attention kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing the effectiveness of the novel attention-learning paradigm leveraged by Performers.
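A minimal NumPy sketch of the FAVOR+ idea: map queries and keys through a positive random-feature map so that dot products of features approximate the softmax kernel exp(qᵀk), then apply attention in linear time by changing the order of matrix products. This is a simplified illustration (the random projections here are plain Gaussian; the paper additionally orthogonalizes them to reduce variance):

```python
import numpy as np

def positive_random_features(X, W):
    """FAVOR+ positive feature map: phi(x) = exp(Wx - ||x||^2 / 2) / sqrt(m),
    so E[phi(q)^T phi(k)] = exp(q^T k) for Gaussian rows of W."""
    m = W.shape[0]
    proj = X @ W.T                            # (L, m)
    sq = 0.5 * (X ** 2).sum(-1, keepdims=True)
    return np.exp(proj - sq) / np.sqrt(m)

def favor_attention(Q, K, V, W):
    """Linear-time attention: never materializes the L x L matrix."""
    d = Q.shape[-1]
    # Rescale so phi(q)^T phi(k) estimates exp(q^T k / sqrt(d)).
    Qp = positive_random_features(Q / d ** 0.25, W)
    Kp = positive_random_features(K / d ** 0.25, W)
    KV = Kp.T @ V                             # (m, d_v): O(L m d), not O(L^2)
    norm = Qp @ Kp.sum(0)                     # (L,): row normalizers
    return (Qp @ KV) / norm[:, None]

rng = np.random.default_rng(1)
L, d, m = 8, 4, 256
Q, K, V = rng.normal(size=(3, L, d))
W = rng.normal(size=(m, d))
approx = favor_attention(Q, K, V, W)

# Exact softmax attention, for comparison.
A = np.exp(Q @ K.T / np.sqrt(d))
exact = (A / A.sum(-1, keepdims=True)) @ V
print(np.abs(approx - exact).max())
```

Because the L × L attention matrix is never formed, memory and time scale linearly in sequence length for fixed feature count m.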
Determining and predicting reservoir formation properties for newly drilled wells is a significant challenge. One way to evaluate these properties is well-interval similarity. Many methodologies for similarity learning exist, from rule-based approaches to deep neural networks. Recent work has adopted, e.g., recurrent neural networks to build similarity models, since we deal with sequential data. Such an approach suffers from short-term memory, as it pays more attention to the end of a sequence. Neural networks with the Transformer architecture instead cast their attention over the whole sequence to make a decision. To make them more efficient in terms of computational time, we introduce a limited attention mechanism similar to those of the Informer and Performer architectures. We conduct experiments on open datasets with more than 20 wells, making our experiments reliable and suitable for industrial use. The best results were obtained with our adaptation of the Informer variant of the Transformer, with a ROC AUC of 0.982. It outperforms classical approaches (ROC AUC 0.824), recurrent neural networks (ROC AUC 0.934) and a straightforward application of Transformers (ROC AUC 0.961).
The similarity learning problem in the oil and gas industry aims to construct a model that estimates the similarity between interval measurements of logging data. Previous attempts were mainly based on empirical rules, so our goal is to automate this process and exclude expensive and time-consuming expert labeling. One approach to similarity learning is self-supervised learning (SSL). In contrast to the supervised paradigm, it requires little to no labeling of the data, so we can learn such models even when labels are absent or scarce. Most SSL methods today are either contrastive or non-contrastive. However, contrastive methods do not scale well with the number of objects due to possible mislabeling of positive and negative samples. Non-contrastive methods do not rely on negative samples; such approaches are actively used in computer vision. We introduce non-contrastive SSL for time-series data. In particular, we build on the BYOL and Barlow Twins methods, which avoid using negative pairs and focus only on matching positive pairs. A key component of these methods is the augmentation strategy: different augmentations of time series exist, and their effect on performance can be both positive and negative. Our augmentation strategies and adaptations of BYOL and Barlow Twins allow us to achieve higher quality (ARI = 0.49) than other self-supervised methods (ARI = 0.34 at best), demonstrating the usefulness of the proposed non-contrastive self-supervised approaches for the interval-similarity problem and for time-series representation learning in general.
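A minimal NumPy sketch of the Barlow-Twins-style objective named above: embeddings of two augmented views of the same series should have a cross-correlation matrix close to the identity. The crop augmentation and the use of raw crops in place of encoder outputs are stand-in assumptions for illustration only:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins objective on two batches of view embeddings:
    drive the per-feature cross-correlation matrix toward identity."""
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-9)   # standardize per feature
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-9)
    c = z1.T @ z2 / n                              # (d, d) cross-correlation
    on_diag = ((np.diag(c) - 1) ** 2).sum()        # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

def random_crop(x, win, rng):
    """Hypothetical time-series augmentation: random fixed-length window."""
    start = rng.integers(0, x.shape[-1] - win + 1)
    return x[..., start:start + win]

rng = np.random.default_rng(0)
batch = rng.normal(size=(32, 128))                 # 32 series of length 128
v1 = np.stack([random_crop(s, 64, rng) for s in batch])
v2 = np.stack([random_crop(s, 64, rng) for s in batch])
# In practice v1, v2 would pass through an encoder; raw crops stand in here.
loss = barlow_twins_loss(v1[:, :16], v2[:, :16])
print(loss >= 0)  # True: both terms are sums of squares
```

No negative pairs appear anywhere in the loss, which is what makes the method non-contrastive.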
Image rasterization is a mature technique in computer graphics, while image vectorization, the reverse path of rasterization, remains a major challenge. Recent advanced deep-learning-based models achieve vectorization and semantic interpolation of vector graphics and demonstrate better topology when generating new figures. However, deep models cannot easily generalize to out-of-domain test data, and the generated SVGs contain complex and redundant shapes that are not convenient for further editing. Specifically, the crucial layer-wise topology and fundamental semantics in images are still not well understood and thus not fully explored. In this work, we propose Layer-wise Image Vectorization, namely LIVE, to convert raster images to SVGs while simultaneously maintaining their image topology. LIVE can generate compact SVG forms with layer-wise structures that are consistent with human perception. We progressively add new Bézier paths and optimize these paths via a layer-wise framework, newly designed loss functions, and a component-wise path initialization technique. Our experiments demonstrate that LIVE presents more plausible vectorized forms than prior works and can generalize to new images. With the help of this newly learned topology, LIVE initiates human-editable SVGs for both designers and other downstream applications. The code is available at https://github.com/picsart-ai-research/live-layerwise-image-vectorization.
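The primitive being progressively added and optimized above is the cubic Bézier path, the standard SVG curve segment. A minimal sketch of evaluating one segment from its four control points (purely illustrative; LIVE optimizes the control points of many such segments through a differentiable rasterizer):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier segment at parameter t in [0, 1],
    via the Bernstein basis: endpoints p0, p3; control handles p1, p2."""
    u = 1.0 - t
    return (u ** 3) * p0 + 3 * u * u * t * p1 + 3 * u * t * t * p2 + (t ** 3) * p3

# One SVG-style path segment.
p0, p1, p2, p3 = (np.array(p, dtype=float) for p in
                  [(0, 0), (1, 2), (3, 2), (4, 0)])
ts = np.linspace(0.0, 1.0, 5)
pts = np.stack([cubic_bezier(p0, p1, p2, p3, t) for t in ts])
print(pts[0], pts[-1])  # endpoints: [0. 0.] [4. 0.]
```

Because the curve is a polynomial in the control points, gradients of any raster-space loss flow back to them directly, which is what makes path-by-path optimization feasible.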